Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks
This paper presents, to the best of our knowledge, the first end-to-end object
tracking approach which directly maps from raw sensor input to object tracks in
sensor space, without requiring any feature engineering or system identification
in the form of plant or sensor models. Specifically, our system accepts a
stream of raw sensor data at one end and, in real time, produces an estimate of
the entire environment state at the output, including even occluded objects. We
achieve this by framing the problem as a deep learning task and exploit
sequence models in the form of recurrent neural networks to learn a mapping
from sensor measurements to object tracks. In particular, we propose a learning
method based on a form of input dropout which allows training in an
unsupervised manner, using only raw, occluded sensor data without access to
ground-truth annotations. We demonstrate our approach on a synthetic dataset
designed to mimic the task of tracking objects in 2D laser data -- as commonly
encountered in robotics applications -- and show that it learns to track many
dynamic objects despite occlusions and the presence of sensor noise.
Comment: Published in The Thirtieth AAAI Conference on Artificial Intelligence
(AAAI-16). Video: https://youtu.be/cdeWCpfUGWc, Code:
http://mrg.robots.ox.ac.uk/mrg_people/peter-ondruska
What Makes a Place? Building Bespoke Place Dependent Object Detectors for Robotics
This paper is about enabling robots to improve their perceptual performance
through repeated use in their operating environment, creating local expert
detectors fitted to the places through which a robot moves. We leverage the
concept of 'experiences' in visual perception for robotics, accounting for bias
in the data a robot sees by fitting object detector models to a particular
place. The key question we seek to answer in this paper is simply: how do we
define a place? We build bespoke pedestrian detector models for autonomous
driving, highlighting the necessary trade-off between generalisation and model
capacity as we vary the extent of the place we fit to. We demonstrate a
sizeable performance gain over a current state-of-the-art detector when using
computationally lightweight, bespoke place-fitted detector models.
Comment: IROS 201
Dropout Distillation for Efficiently Estimating Model Confidence
We propose an efficient way to output better calibrated uncertainty scores
from neural networks. The Distilled Dropout Network (DDN) makes standard
(non-Bayesian) neural networks more introspective by adding a new training loss
which prevents them from being overconfident. Our method is more efficient than
Bayesian neural networks or model ensembles which, despite providing more
reliable uncertainty scores, are more cumbersome to train and slower to test.
We evaluate DDN on the task of image classification on the CIFAR-10 dataset
and show that our calibration results are competitive even when compared to 100
Monte Carlo samples from a dropout network, while also increasing
classification accuracy. We also pursue better calibration within the
state-of-the-art Faster R-CNN object detection framework and show, using the
COCO dataset, that DDN helps train better calibrated object detectors.
Incremental Adversarial Domain Adaptation for Continually Changing Environments
Continuous appearance shifts such as changes in weather and lighting
conditions can impact the performance of deployed machine learning models.
While unsupervised domain adaptation aims to address this challenge, current
approaches do not utilise the continuity of the occurring shifts. Many robotics
applications in particular exhibit these conditions, and thus offer the
potential to incrementally adapt a learnt model over minor shifts which
accumulate into large differences over time. Our work presents an adversarial
approach for lifelong, incremental domain adaptation which benefits from
unsupervised alignment to a series of intermediate domains that successively
diverge from the labelled source domain. We empirically demonstrate that our
incremental approach improves the handling of large appearance changes, e.g.
day to night, on a traversable-path segmentation task compared with a direct,
single-alignment-step approach. Furthermore, by approximating the feature
distribution of the source domain with a generative adversarial network, the
deployment module can be rendered fully independent of retaining potentially
large amounts of the related source training data, for only a minor reduction
in performance.
Comment: International Conference on Robotics and Automation 201
Mutual Alignment Transfer Learning
Training robots for operation in the real world is a complex, time-consuming
and potentially expensive task. Despite the significant success of reinforcement
learning in games and simulations, research in real robot applications has not
been able to match similar progress. While sample complexity can be reduced by
training policies in simulation, such policies can perform sub-optimally on the
real platform given imperfect calibration of model dynamics. We present an
approach -- supplemental to fine-tuning on the real robot -- to further benefit
from parallel access to a simulator during training and reduce sample
requirements on the real robot. The developed approach harnesses auxiliary
rewards to guide the exploration of the real-world agent based on the
proficiency of the agent in simulation, and vice versa. In this context, we
demonstrate empirically that the reciprocal alignment of both agents provides
further benefit, as the agent in simulation can adjust to optimise its
behaviour for states commonly visited by the real-world agent.
Addressing Appearance Change in Outdoor Robotics with Adversarial Domain Adaptation
Appearance changes due to weather and seasonal conditions represent a strong
impediment to the robust implementation of machine learning systems in outdoor
robotics. While supervised learning optimises a model for the training domain,
it will deliver degraded performance in application domains that undergo
distributional shifts caused by these changes. Traditionally, this problem has
been addressed via the collection of labelled data in multiple domains or by
imposing priors on the type of shift between both domains. We frame the problem
in the context of unsupervised domain adaptation and develop a framework for
applying adversarial techniques to adapt popular, state-of-the-art network
architectures with the additional objective of aligning features across
domains. Moreover, as adversarial training is notoriously unstable, we first
perform an extensive ablation study, adapting many techniques known to
stabilise generative adversarial networks, and evaluate on a surrogate
classification task with the same appearance change. The distilled insights are
applied to the problem of free-space segmentation for motion planning in
autonomous driving.
Comment: In Proceedings of the 2017 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2017).
Probably Unknown: Deep Inverse Sensor Modelling In Radar
Radar presents a promising alternative to lidar and vision in autonomous
vehicle applications, able to detect objects at long range under a variety of
weather conditions. However, distinguishing between occupied and free space
from raw radar power returns is challenging due to complex interactions between
sensor noise and occlusion.
To counter this, we propose to learn an Inverse Sensor Model (ISM) converting
a raw radar scan to a grid map of occupancy probabilities using a deep neural
network. Our network is self-supervised using partial occupancy labels
generated by lidar, allowing a robot to learn about world occupancy from past
experience without human supervision. We evaluate our approach on five hours of
data recorded in a dynamic urban environment. By accounting for the scene
context of each grid cell, our model is able to successfully segment the world
into occupied and free space, outperforming standard CFAR filtering approaches.
Additionally, by incorporating heteroscedastic uncertainty into our model
formulation, we are able to quantify the variance in the uncertainty throughout
the sensor observation. Through this mechanism we are able to successfully
identify regions of space that are likely to be occluded.
Comment: 6 full pages, 1 page of references
Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments
We present a self-supervised approach to ignoring "distractors" in camera
images for the purposes of robustly estimating vehicle motion in cluttered
urban environments. We leverage offline multi-session mapping approaches to
automatically generate a per-pixel ephemerality mask and depth map for each
input image, which we use to train a deep convolutional network. At run-time we
use the predicted ephemerality and depth as inputs to a monocular visual
odometry (VO) pipeline, using either sparse features or dense photometric
matching. Our approach yields metric-scale VO using only a single camera and
can recover the correct egomotion even when 90% of the image is obscured by
dynamic, independently moving objects. We evaluate our robust VO methods on
more than 400 km of driving from the Oxford RobotCar Dataset and demonstrate
reduced odometry drift and significantly improved egomotion estimation in the
presence of large moving vehicles in urban traffic.
Comment: International Conference on Robotics and Automation (ICRA), 2018.
Video summary: http://youtu.be/ebIrBn_nc-
End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks
In this work we present a novel end-to-end framework for tracking and
classifying a robot's surroundings in complex, dynamic and only partially
observable real-world environments. The approach deploys a recurrent neural
network to filter an input stream of raw laser measurements in order to
directly infer object locations, along with their identity, in both visible and
occluded areas. To achieve this we first train the network using unsupervised
Deep Tracking, a recently proposed theoretical framework for end-to-end space
occupancy prediction. We show that by learning to track on a large amount of
unsupervised data, the network creates a rich internal representation of its
environment, which we in turn exploit through the principle of inductive
knowledge transfer to perform semantic classification. As a result, we show
that only a small amount of labelled data suffices to steer the network towards
mastering this additional task. Furthermore, we propose a novel recurrent
neural network architecture specifically tailored to tracking and semantic
classification in real-world robotics applications. We demonstrate the tracking
and classification performance of the method on real-world data collected at a
busy road junction. Our evaluation shows that the proposed end-to-end framework
compares favourably to a state-of-the-art, model-free tracking solution and
that it outperforms a conventional one-shot training scheme for semantic
classification.